Speech-based and multimodal media center for different user groups
Authors
Abstract
We present a multimodal media center interface based on speech input, gestures, and haptic feedback. For special user groups, including visually and physically impaired users, the application features a zoomable context + focus GUI in tight combination with speech output and full speech-based control. These features have been developed in cooperation with representatives of the user groups. The system has also been evaluated with regular users; results from a study collecting subjective evaluations show that the performance and user experience of speech input were very good, consistent with the results of a ten-month public pilot.
Similar articles
Multimodal Media Center Interface Based on Speech, Gestures and Haptic Feedback
We present a multimodal media center interface based on speech input, gestures, and haptic feedback (hapticons). In addition, the application includes a zoomable context + focus GUI in tight combination with speech output. The resulting interface is designed for and evaluated with different user groups, including visually and physically impaired users. Finally, we present the key results from i...
Modeling different decision strategies in a time tabled multimodal route planning by integrating the quantifier-guided OWA operators, fuzzy AHP weighting method and TOPSIS
The purpose of Multi-modal Multi-criteria Personalized Route Planning (MMPRP) is to provide an optimal route between an origin-destination pair by weighting the effective criteria, such that the route can combine public and private modes of transportation. In this paper, the fuzzy analytical hierarchy process (fuzzy AHP) and the quantifier-guided ordered weighted averaging (...
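The quantifier-guided OWA aggregation mentioned in this abstract can be sketched briefly. The standard construction derives the OWA weights from a regular increasing monotone (RIM) quantifier via w_i = Q(i/n) − Q((i−1)/n) and then applies them to the criterion scores sorted in descending order; the specific quantifier Q(r) = r^α and the sample scores below are illustrative assumptions, not values from the paper.

```python
def quantifier_weights(n, alpha=2.0):
    # Derive OWA weights from the RIM quantifier Q(r) = r**alpha.
    # alpha > 1 shifts emphasis toward satisfying "most" criteria.
    Q = lambda r: r ** alpha
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

def owa(values, weights):
    # OWA: reorder the criterion scores descending, then take the
    # weighted sum with the position-based (not criterion-based) weights.
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Illustrative criterion scores for one candidate route (assumed values).
w = quantifier_weights(4)              # weights sum to 1
score = owa([0.9, 0.4, 0.7, 0.2], w)   # aggregated route score
```

Because the weights attach to rank positions rather than to individual criteria, the aggregate depends only on the multiset of scores, which is what lets the quantifier model decision attitudes from "at least one criterion satisfied" to "all criteria satisfied".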
Speak4it and the Multimodal Semantic Interpretation System
Multimodal interaction allows users to specify commands using combinations of inputs from multiple different modalities. For example, in a local search application, a user might say “gas stations” while simultaneously tracing a route on a touchscreen display. In this demonstration, we describe the extension of our cloud-based speech recognition architecture to a Multimodal Semantic Interpretati...
Multimodal Interaction with Speech, Gestures and Haptic Feedback in a Media Center Application
We demonstrate interaction with a multimodal media center application. A mobile phone-based interface provides speech and gesture input and haptic feedback. The setup resembles our long-term public pilot study, in which a living room environment containing the application was constructed inside a local media museum, allowing visitors to freely test the system.
Multimodal Mood-based Annotation
The paper presents an architecture for multimodal mood-based annotation systems. The architecture aims at the implementation of interactive multimodal systems to support communities of users in the creation and management of annotations in locative media projects. The annotations are multimodal in that they can be created and accessed through visual and audio interaction. The annotations are mo...